65 research outputs found

    QDC: Quantum Diffusion Convolution Kernels on Graphs

    Graph convolutional neural networks (GCNs) operate by aggregating messages over local neighborhoods for the prediction task of interest. Many GCNs can be understood as a form of generalized diffusion of input features on the graph, and significant work has been dedicated to improving predictive accuracy by altering how messages are passed. In this work, we propose a new convolution kernel that effectively rewires the graph according to the occupation correlations of the vertices, trading the generalized diffusion paradigm for the propagation of a quantum particle over the graph. We term this new convolution kernel the Quantum Diffusion Convolution (QDC) operator. In addition, we introduce a multiscale variant that combines messages from the QDC operator and the traditional combinatorial Laplacian. To understand our method, we explore the spectral dependence of homophily and the importance of quantum dynamics in the construction of a bandpass filter. Through these studies, as well as experiments on a range of datasets, we observe that QDC improves predictive performance on widely used benchmark datasets when compared to similar methods.
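
    A minimal sketch of the underlying idea, assuming a symmetric graph Laplacian L: the occupation probabilities of a continuous-time quantum walk, |<v| exp(-itL) |u>|^2, can be thresholded to define a rewired, densified graph on which ordinary message passing then runs. The propagation time t and threshold eps below are illustrative parameters, not the paper's exact construction (which builds a bandpass filter from the quantum dynamics).

        import numpy as np

        def quantum_diffusion_kernel(L, t=1.0, eps=1e-3):
            """Sketch of a quantum-walk-based convolution kernel.

            L   : dense symmetric graph Laplacian, shape (n, n)
            t   : propagation time of the quantum particle
            eps : threshold below which kernel entries are dropped
            """
            # Eigendecomposition of the Hermitian Laplacian.
            lam, U = np.linalg.eigh(L)
            # Unitary propagator exp(-i t L) of the quantum walk.
            prop = (U * np.exp(-1j * t * lam)) @ U.conj().T
            # Occupation correlations |<v| exp(-i t L) |u>|^2.
            K = np.abs(prop) ** 2
            # Sparsify: keep only significant correlations (graph rewiring).
            K[K < eps] = 0.0
            # Row-normalize so K acts as a message-passing operator,
            # used in place of the usual adjacency: H' = act(K @ H @ W).
            return K / K.sum(axis=1, keepdims=True)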

    A correlated-polaron electronic propagator: open electronic dynamics beyond the Born-Oppenheimer approximation

    In this work we develop a theory of correlated many-electron dynamics dressed by the presence of a finite-temperature harmonic bath. The theory is based on the ab initio Hamiltonian, and is thus well defined apart from any phenomenological choice of collective basis states or electronic coupling model. The equation of motion includes some bath effects non-perturbatively and can be used to simulate lineshapes beyond the Markovian approximation, as well as open electronic dynamics, both subjects of renewed recent interest. Energy conversion and transport depend critically on the ratio of electron-electron coupling to bath-electron coupling, which is a fitted parameter if a phenomenological basis of many-electron states is used to develop an electronic equation of motion. Since the present work does not appeal to any such basis, it avoids this ambiguity. The new theory produces a level of detail beyond the adiabatic Born-Oppenheimer states, but with cost scaling like the Born-Oppenheimer approach. While developing this model we have also applied time-convolutionless perturbation theory to correlated molecular excitations for the first time. Resonant response properties are given by the formalism without phenomenological parameters. Example propagations with a developmental code are given, demonstrating the treatment of electron correlation in absorption spectra, vibronic structure, and decay in an open system.
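
    The correlated-polaron equation of motion itself is beyond a short snippet, but the generic structure it extends, a density matrix evolving under a system Hamiltonian plus bath-induced dissipation, can be sketched with a Markovian Lindblad step; the paper's time-convolutionless, non-perturbative treatment goes beyond exactly this form. The Hamiltonian H, coupling operator A, and rate gamma below are placeholders, not quantities from the paper.

        import numpy as np

        def lindblad_step(rho, H, A, gamma, dt):
            """One Euler step of a generic Lindblad master equation:
            drho/dt = -i[H, rho] + gamma*(A rho A+ - {A+A, rho}/2)."""
            comm = -1j * (H @ rho - rho @ H)          # coherent evolution
            AdA = A.conj().T @ A
            diss = gamma * (A @ rho @ A.conj().T
                            - 0.5 * (AdA @ rho + rho @ AdA))  # bath-induced decay
            return rho + dt * (comm + diss)

        # Two-level toy example: decay of an excited electronic state.
        H = np.diag([0.0, 1.0])                    # energies (arbitrary units)
        A = np.array([[0.0, 1.0], [0.0, 0.0]])     # lowering operator
        rho = np.diag([0.0, 1.0]).astype(complex)  # start fully excited
        for _ in range(1000):
            rho = lindblad_step(rho, H, A, gamma=0.05, dt=0.01)
        print(rho[1, 1].real)  # excited-state population, roughly exp(-0.5)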

    Graph Neural Networks as Gradient Flows: understanding graph convolutions via energy

    Gradient flows are differential equations that minimize an energy functional and constitute the main descriptors of physical systems. We apply this formalism to Graph Neural Networks (GNNs) to develop new frameworks for learning on graphs as well as provide a better theoretical understanding of existing ones. We derive GNNs as a gradient flow equation of a parametric energy that provides a physics-inspired interpretation of GNNs as learning particle dynamics in the feature space. In particular, we show that in graph convolutional models (GCNs), the positive/negative eigenvalues of the channel-mixing matrix correspond to attractive/repulsive forces between adjacent features. We rigorously prove how the channel mixing can learn to steer the dynamics towards low or high frequencies, which makes it possible to deal with heterophilic graphs. We show that the same class of energies is decreasing along a larger family of GNNs; albeit not gradient flows, they retain the same inductive bias. We experimentally evaluate an instance of the gradient flow framework that is principled, more efficient than GCN, and achieves competitive performance on graph datasets of varying homophily, often outperforming recent baselines specifically designed to target heterophily.
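
    A minimal sketch of the gradient-flow view, assuming a symmetric normalized adjacency A_hat: symmetrizing the channel-mixing matrix W makes one layer an explicit-Euler step on a quadratic energy, and the signs of W's eigenvalues determine whether adjacent features attract (smoothing, low-pass) or repel (sharpening, high-pass). All names are illustrative, not the paper's exact parameterization.

        import numpy as np

        def gradient_flow_layer(X, A_hat, W, tau=0.1):
            """One explicit-Euler step of dX/dt = -grad E(X) for
            E(X) = -1/2 * trace(X^T A_hat X W_sym).

            X     : node features, shape (n, d)
            A_hat : symmetric normalized adjacency, shape (n, n)
            W     : channel-mixing matrix, shape (d, d)
            """
            # Symmetrizing W is what makes the update a genuine gradient flow.
            W_sym = 0.5 * (W + W.T)
            # Positive eigenvalues of W_sym act as attraction between adjacent
            # features (diffusion/low-pass); negative ones as repulsion
            # (sharpening/high-pass), useful on heterophilic graphs.
            return X + tau * (A_hat @ X @ W_sym)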

    Graph Neural Networks for Link Prediction with Subgraph Sketching

    Many Graph Neural Networks (GNNs) perform poorly compared to simple heuristics on Link Prediction (LP) tasks. This is due to limitations in expressive power such as the inability to count triangles (the backbone of most LP heuristics) and the inability to distinguish automorphic nodes (those having identical structural roles). Both expressiveness issues can be alleviated by learning link (rather than node) representations and incorporating structural features such as triangle counts. Since explicit link representations are often prohibitively expensive, recent works have resorted to subgraph-based methods, which have achieved state-of-the-art performance for LP but suffer from poor efficiency due to high levels of redundancy between subgraphs. We analyze the components of subgraph GNN (SGNN) methods for link prediction. Based on our analysis, we propose a novel full-graph GNN called ELPH (Efficient Link Prediction with Hashing) that passes subgraph sketches as messages to approximate the key components of SGNNs without explicit subgraph construction. ELPH is provably more expressive than Message Passing GNNs (MPNNs). It outperforms existing SGNN models on many standard LP benchmarks while being orders of magnitude faster. However, it shares the common GNN limitation that it is only efficient when the dataset fits in GPU memory. Accordingly, we develop a highly scalable model called BUDDY, which uses feature precomputation to circumvent this limitation without sacrificing predictive performance. Our experiments show that BUDDY also outperforms SGNNs on standard LP benchmarks while being highly scalable and faster than ELPH.
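
    A toy illustration of the sketching idea: MinHash signatures of node neighborhoods can be passed as compact messages and compared to estimate neighborhood overlap, a core common-neighbor-style LP signal, without ever materializing per-link subgraphs. ELPH and BUDDY combine MinHash with HyperLogLog inside a full-graph GNN; the snippet below shows only the estimation step, with illustrative parameter choices.

        import random

        def minhash_signature(neighbors, num_hashes=64, seed=0):
            """MinHash signature of a node's (non-empty) neighbor set."""
            rng = random.Random(seed)  # same seed => same hash functions for all nodes
            params = [(rng.randrange(1, 2**31), rng.randrange(2**31))
                      for _ in range(num_hashes)]
            prime = 2**31 - 1
            return [min((a * n + b) % prime for n in neighbors)
                    for a, b in params]

        def estimated_jaccard(sig_u, sig_v):
            """Fraction of matching slots estimates Jaccard(N(u), N(v))."""
            return sum(x == y for x, y in zip(sig_u, sig_v)) / len(sig_u)

        # Nodes with heavily overlapping neighborhoods score high:
        sig_u = minhash_signature({1, 2, 3, 4, 5})
        sig_v = minhash_signature({2, 3, 4, 5, 6})
        print(estimated_jaccard(sig_u, sig_v))  # near the true Jaccard, 4/6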